custom instruction
Prioritize Economy or Climate Action? Investigating ChatGPT Response Differences Based on Inferred Political Orientation
Karadal, Pelin, Kekulluoglu, Dilara
Large Language Models (LLMs) distinguish themselves by quickly delivering information and providing personalized responses through natural language prompts. However, they also infer user demographics, raising ethical concerns about bias, implicit personalization, and echo chamber effects. This study explores how inferred political views shape ChatGPT's responses globally, across chat sessions. We also investigate how the custom instructions and memory features alter ChatGPT's responses in light of inferred political orientation. We developed three personas (two politically oriented and one neutral), each with four statements reflecting their viewpoints on DEI programs, abortion, gun rights, and vaccination. We conveyed the personas' remarks to ChatGPT using memory and custom instructions, allowing it to infer their political perspectives without those perspectives being stated directly. We then asked eight questions designed to reveal differences in worldview among the personas and conducted a qualitative analysis of the responses. Our findings indicate that responses align with the personas' inferred political views, showing varied reasoning and vocabulary even when discussing similar topics. We also find that this inference occurs in similar ways through both the explicit custom instructions feature and the implicit memory feature. Analyzing response similarities reveals that the closest matches occur between the Democratic persona with custom instructions and the neutral persona, supporting the observation that ChatGPT's outputs lean left.
- North America > United States (1.00)
- Europe > Austria > Vienna (0.14)
- Asia > Middle East > Palestine > Gaza Strip > Gaza Governorate > Gaza (0.05)
- (15 more...)
- Law > Civil Rights & Constitutional Law (1.00)
- Health & Medicine > Therapeutic Area (1.00)
- Government > Immigration & Customs (1.00)
- (4 more...)
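The response-similarity comparison described in the abstract can be sketched with a simple bag-of-words cosine similarity. The persona labels and response texts below are hypothetical stand-ins, not data from the study:

```python
import math
from collections import Counter

def tokenize(text):
    return [w.strip(".,").lower() for w in text.split()]

def cosine_similarity(a, b):
    """Cosine similarity between two bags of words."""
    va, vb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical persona responses to the same question
responses = {
    "democrat_ci": "Climate action protects communities and future generations.",
    "republican_ci": "Economic growth funds innovation that can address climate change.",
    "neutral": "Climate action and economic growth both matter for communities.",
}

# Pairwise similarity across personas; the closest pair is the best match
pairs = {(p, q): cosine_similarity(responses[p], responses[q])
         for p in responses for q in responses if p < q}
closest = max(pairs, key=pairs.get)
```

A real analysis would use the study's full transcripts and likely a stronger representation (TF-IDF or embeddings), but the pairwise-maximum structure is the same.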
Accelerating HDC-CNN Hybrid Models Using Custom Instructions on RISC-V GPUs
Matsumi, Wakuto, Mian, Riaz-Ul-Haque
Machine learning based on neural networks has advanced rapidly, but the high energy consumption required for training and inference remains a major challenge. Hyperdimensional Computing (HDC) offers a lightweight, brain-inspired alternative that enables high parallelism but often suffers from lower accuracy on complex visual tasks. To overcome this, hybrid accelerators combining HDC and Convolutional Neural Networks (CNNs) have been proposed, though their adoption is limited by poor generalizability and programmability. The rise of open-source RISC-V architectures has created new opportunities for domain-specific GPU design. Unlike traditional proprietary GPUs, emerging RISC-V-based GPUs provide flexible, programmable platforms suitable for custom computation models such as HDC. In this study, we design and implement custom GPU instructions optimized for HDC operations, enabling efficient processing for hybrid HDC-CNN workloads. Experimental results using four types of custom HDC instructions show a performance improvement of up to 56.2 times in microbenchmark tests, demonstrating the potential of RISC-V GPUs for energy-efficient, high-performance computing.
- North America > United States > California > Alameda County > Berkeley (0.05)
- North America > United States > California > Orange County > Irvine (0.04)
- North America > United States > Arizona > Maricopa County > Phoenix (0.04)
- (2 more...)
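The HDC primitives that such custom instructions typically target are binding, bundling, and similarity over hypervectors. A minimal stdlib-Python sketch of the bipolar variants (the dimensionality and specific operations here are illustrative, not the paper's instruction set):

```python
import random

D = 1024  # hypervector dimensionality (illustrative; HDC often uses ~10,000)

def rand_hv(rng):
    """Random bipolar hypervector: elements drawn from {-1, +1}."""
    return [rng.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    """Binding: elementwise multiply (the XOR analogue for bipolar vectors)."""
    return [x * y for x, y in zip(a, b)]

def bundle(*hvs):
    """Bundling: elementwise majority vote via the sign of the sum."""
    return [1 if sum(xs) >= 0 else -1 for xs in zip(*hvs)]

def similarity(a, b):
    """Normalized dot product in [-1, 1]."""
    return sum(x * y for x, y in zip(a, b)) / D

rng = random.Random(0)
x, y = rand_hv(rng), rand_hv(rng)
pair = bind(x, y)
# Binding is its own inverse for bipolar vectors: bind(pair, y) recovers x
recovered = bind(pair, y)
```

Each of these loops is embarrassingly parallel over the vector elements, which is why mapping them onto wide GPU lanes via dedicated instructions pays off.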
OpenAI Designed GPT-5 to Be Safer. It Still Outputs Gay Slurs
OpenAI is trying to make its chatbot less annoying with the release of GPT-5. And I'm not talking about adjustments to its synthetic personality that many users have complained about. Before GPT-5, if the AI tool determined it couldn't answer your prompt because the request violated OpenAI's content guidelines, it would hit you with a curt, canned apology. Now, ChatGPT is adding more explanations. OpenAI's general model spec lays out what is and isn't allowed to be generated.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
SANGAM: SystemVerilog Assertion Generation via Monte Carlo Tree Self-Refine
Gupta, Adarsh, Mali, Bhabesh, Karfa, Chandan
Recent advancements in reasoning with Large Language Models (LLMs) have created new possibilities for more complex and automatic hardware assertion generation techniques. This paper introduces SANGAM, a SystemVerilog Assertion (SVA) generation framework that uses LLM-guided Monte Carlo Tree Search to generate SVAs automatically from industry-level specifications. The framework takes a three-stage approach: Stage 1 performs multi-modal specification processing using Signal Mapper, SPEC Analyzer, and Waveform Analyzer LLM agents; Stage 2 applies the Monte Carlo Tree Self-Refine (MCTSr) algorithm to reason automatically about the SVAs for each signal; and Stage 3 combines the MCTSr-generated reasoning traces to produce SVA assertions for each signal. Our results demonstrate that SANGAM generates a robust set of SVAs and outperforms recent methods in our evaluation.
- Africa > Mali (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- Europe > Netherlands > South Holland > Dordrecht (0.04)
- Asia > India > Assam > Guwahati (0.04)
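Stage 2's Monte Carlo Tree Self-Refine loop can be sketched as a select-refine-score-backpropagate cycle. In the sketch below, `refine` and `score` are placeholders standing in for the paper's LLM agents and critic, and the candidate strings are not real SVAs:

```python
import math, random

class Node:
    def __init__(self, candidate, parent=None):
        self.candidate = candidate      # a candidate assertion (placeholder string)
        self.parent, self.children = parent, []
        self.visits, self.total_reward = 0, 0.0

    def ucb(self, c=1.4):
        """Upper-confidence bound balancing reward and exploration."""
        if self.visits == 0:
            return float("inf")
        return (self.total_reward / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def refine(candidate, rng):
    """Placeholder for the LLM self-refine step: perturb the candidate."""
    return candidate + rng.choice([" +stable", " +timed", " +guarded"])

def score(candidate):
    """Placeholder reward; the real framework uses LLM/critic feedback."""
    return min(1.0, len(candidate) / 60)

def mctsr(seed_candidate, iterations=50, rng=None):
    rng = rng or random.Random(0)
    root = Node(seed_candidate)
    for _ in range(iterations):
        node = root
        while node.children:                    # selection: descend by UCB
            node = max(node.children, key=Node.ucb)
        child = Node(refine(node.candidate, rng), parent=node)  # refine/expand
        node.children.append(child)
        reward = score(child.candidate)         # evaluation
        while child:                            # backpropagation to the root
            child.visits += 1
            child.total_reward += reward
            child = child.parent
    best = max(root.children, key=lambda n: n.total_reward / n.visits)
    return best.candidate
```

The tree structure lets strong partial refinements be revisited and pushed further, rather than committing to a single greedy rewrite chain.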
Epistemic Alignment: A Mediating Framework for User-LLM Knowledge Delivery
Clark, Nicholas, Shen, Hua, Howe, Bill, Mitra, Tanushree
LLMs increasingly serve as tools for knowledge acquisition, yet users cannot effectively specify how they want information presented. When users ask LLMs to "cite reputable sources," "express appropriate uncertainty," or "include multiple perspectives," they discover that current interfaces provide no structured way to articulate these preferences. The result is prompt-sharing folklore: community-specific prompts copied and passed along through trust relationships rather than adopted on the basis of measured efficacy. We propose the Epistemic Alignment Framework, a set of ten challenges in knowledge transmission derived from the philosophical literature on epistemology, concerning issues such as evidence quality assessment and calibration of testimonial reliance. The framework serves as a structured intermediary between user needs and system capabilities, creating a common vocabulary to bridge the gap between what users want and what systems deliver. Through a thematic analysis of custom prompts and personalization strategies shared in online communities where these issues are actively discussed, we find that users develop elaborate workarounds for each of the challenges. We then apply our framework to two prominent model providers, OpenAI and Anthropic, through content analysis of their documented policies and product features. Our analysis shows that while these providers have partially addressed the challenges we identified, they fail to establish adequate mechanisms for specifying epistemic preferences, lack transparency about how preferences are implemented, and offer no verification tools to confirm whether preferences were followed. For AI developers, the Epistemic Alignment Framework offers concrete guidance for supporting diverse approaches to knowledge; for users, it works toward information delivery that aligns with their specific needs rather than defaulting to one-size-fits-all approaches.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.04)
- North America > United States > Ohio (0.04)
- Europe > Netherlands > South Holland > Dordrecht (0.04)
- Asia > India (0.04)
KWT-Tiny: RISC-V Accelerated, Embedded Keyword Spotting Transformer
Al-Qawlaq, Aness, M, Ajay Kumar, John, Deepu
University College Dublin, Ireland. Abstract -- This paper explores the adaptation of Transformer-based models for edge devices through the quantisation and hardware acceleration of the ARM Keyword Transformer (KWT) model on a RISC-V platform. The model was targeted to run in 64 kB of RAM in bare-metal C using a custom-developed edge AI library. KWT-1 was retrained to be 369 times smaller, with only a 10% loss in accuracy, by reducing the output classes from 35 to 2. Retraining and quantisation reduced the model size from 2.42 MB to 1.65 kB. The integration of custom RISC-V instructions that accelerated the GELU and SoftMax operations enabled a 5x speedup, and thus a ~5x power reduction, in inference, with inference clock cycle counts decreasing from 26 million to 5.5 million while incurring a small area overhead of approximately 29%. The results demonstrate a viable method for porting and accelerating Transformer-based models on low-power IoT devices.
- Europe > Ireland > Leinster > County Dublin > Dublin (0.24)
- Oceania > Australia > Victoria > Melbourne (0.04)
- North America > United States > Virginia (0.04)
- (3 more...)
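One common way an accelerator or custom instruction speeds up GELU on a quantized model is a lookup table precomputed over all possible int8 inputs, replacing the transcendental math with a single indexed read. The sketch below illustrates the idea; the scale factor and LUT scheme are assumptions for illustration, not KWT-Tiny's actual implementation:

```python
import math

def gelu(x):
    """Reference GELU (tanh approximation, common on embedded targets)."""
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

SCALE = 0.1  # int8 quantization step: real value = q * SCALE (illustrative)

# One precomputed GELU output per possible int8 input (-128..127).
# A custom instruction can then serve each activation with one table lookup.
GELU_LUT = [round(gelu(q * SCALE) / SCALE) for q in range(-128, 128)]

def gelu_q8(q):
    """Quantized GELU via table lookup; q is an int8 value."""
    return GELU_LUT[q + 128]
```

A 256-entry table fits comfortably in the 64 kB budget the paper describes, which is why LUT-style approximations are popular on such targets.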
RISC-V R-Extension: Advancing Efficiency with Rented-Pipeline for Edge DNN Processing
Kim, Won Hyeok, Kim, Hyeong Jin, Han, Tae Hee
The proliferation of edge devices necessitates efficient computational architectures for lightweight tasks, particularly deep neural network (DNN) inference. Traditional NPUs, though effective for such operations, face power, cost, and area challenges when integrated into lightweight edge devices. The RISC-V architecture, known for its modularity and open-source nature, offers a viable alternative. This paper introduces the RISC-V R-extension, a novel approach to enhancing DNN processing efficiency on edge devices. The extension features rented-pipeline stages and architectural pipeline registers (APRs), which optimize the execution of critical operations, thereby reducing latency and memory access frequency. The extension also includes new custom instructions to support these architectural improvements. Through comprehensive analysis, this study demonstrates the performance boost the R-extension delivers in edge device processing, setting the stage for more responsive and intelligent edge applications.
- Asia > South Korea > Gyeonggi-do > Suwon (0.05)
- North America > United States > Nevada > Clark County > Las Vegas (0.04)
- North America > United States > Georgia > Fulton County > Atlanta (0.04)
- Asia > China > Jiangsu Province > Nanjing (0.04)
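The memory-traffic benefit of keeping intermediate results in pipeline registers can be illustrated with a toy accounting model of a dot product. The access counts below are a deliberately simplified illustration of the general idea, not the R-extension's actual microarchitecture:

```python
def dot_product_baseline(a, b):
    """Baseline: multiply and accumulate as separate steps, with the
    intermediate product and running sum spilled to memory each step."""
    mem_accesses, acc = 0, 0
    for x, y in zip(a, b):
        mem_accesses += 2          # load x, load y
        prod = x * y
        mem_accesses += 1          # store intermediate product
        mem_accesses += 1          # reload accumulator
        acc += prod
        mem_accesses += 1          # store accumulator
    return acc, mem_accesses

def dot_product_fused(a, b):
    """With a fused MAC-style custom instruction, the accumulator stays
    in an architectural pipeline register: only operand loads touch memory."""
    mem_accesses, acc = 0, 0
    for x, y in zip(a, b):
        mem_accesses += 2          # load x, load y
        acc += x * y               # accumulate in-register
    return acc, mem_accesses
```

Both variants compute the same result; the fused version simply never round-trips intermediates through memory, which is the kind of saving the APRs target.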
OpenAI Gives ChatGPT a Memory
The promise and peril of the internet has always been a memory greater than our own, a permanent recall of information and events that our brains can't store. More recently, tech companies have promised that virtual assistants and chatbots could handle some of the mnemonic load, by both remembering and reminding. That's what OpenAI's latest release is supposed to provide. The company is starting to roll out long-term memory in ChatGPT--a function that maintains a memory of who you are, how you work, and what you like to chat about. Called simply Memory, it's an AI personalization feature that turbocharges the "custom instructions" tool OpenAI released last July.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.85)
ChatGPT-4 with Code Interpreter can be used to solve introductory college-level vector calculus and electromagnetism problems
Kumar, Tanuj, Kats, Mikhail A.
Tanuj Kumar (tanuj.kumar@wisc.edu). Executive summary: We evaluated three modes of ChatGPT -- 3.5, 4, and 4 with Code Interpreter -- on a set of college-level engineering-math and electromagnetism problems, such as those often given to sophomore electrical engineering majors. We selected a set of 13 problems without first testing them on ChatGPT, and had ChatGPT solve each one multiple times, using a fresh instance (chat) of ChatGPT each time. The problems range from elementary to medium difficulty. We graded strictly, marking a solution as incorrect if even a small part of it was wrong. Our major conclusions are: ChatGPT-4 with Code Interpreter (ChatGPT-4/CI), recently renamed Advanced Data Analysis, satisfactorily solved most of the problems we tested, most of the time. Qualitatively, one could give ChatGPT-4/CI a solid passing grade in introductory engineering math and electromagnetics.
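The evaluation protocol described above (repeated fresh-instance attempts, graded all-or-nothing) can be expressed as a small aggregation. The per-part grading data below is hypothetical, for illustration only:

```python
def strict_grade(part_marks):
    """All-or-nothing grading: a solution counts as correct only if
    every sub-part of it is correct."""
    return all(part_marks)

def success_rate(trials):
    """Fraction of independent fresh-instance attempts graded correct.
    Each trial is a list of per-part booleans from a human grader."""
    graded = [strict_grade(t) for t in trials]
    return sum(graded) / len(graded)

# Hypothetical grading of one problem over four fresh ChatGPT instances
trials = [
    [True, True, True],    # fully correct attempt
    [True, False, True],   # one sub-part wrong -> whole attempt incorrect
    [True, True, True],
    [True, True, False],
]
rate = success_rate(trials)
```

Strict marking makes the per-problem rate a conservative lower bound on the model's usefulness, which matches the authors' stated grading stance.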
Best prompts to get the most out of an AI chatbot
Many people are turning to chatbots as search engines. Kurt "The CyberGuy" Knutsson explains how to use them. People love using AI chatbots to assist them with tasks or to simply answer a question that they don't know the answer to. However, a chatbot can only answer to the best of its ability, and we have to also do our part to help it answer our questions as accurately as possible. CLICK TO GET KURT'S FREE CYBERGUY NEWSLETTER WITH SECURITY ALERTS, QUICK TIPS, TECH REVIEWS AND EASY HOW-TO'S TO MAKE YOU SMARTER If you don't get specific enough with how you present your prompts, a chatbot could give you a generic or even incorrect answer.